
    Self-Adjusting Evolutionary Algorithms for Multimodal Optimization

    Recent theoretical research has shown that self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces. However, the vast majority of these studies focus on unimodal functions, which do not require the algorithm to flip several bits simultaneously to make progress. In fact, existing self-adjusting algorithms are not designed to detect local optima and offer no obvious benefit for crossing large Hamming gaps. We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms (both with and without existing self-adjusting mechanisms). Added to a simple (1+1) EA, we prove an expected runtime on the well-known Jump benchmark that corresponds to an asymptotically optimal parameter setting and outperforms other mechanisms for multimodal optimization such as heavy-tailed mutation. We also investigate the module in the context of a self-adjusting (1+λ) EA and show that it combines the previous benefits of this algorithm on unimodal problems with more efficient multimodal optimization. To explore the limitations of the approach, we additionally present an example where both self-adjusting mechanisms, including stagnation detection, do not help to find a beneficial setting of the mutation rate. Finally, we investigate our module for stagnation detection experimentally. Comment: 26 pages. Full version of a paper appearing at GECCO 2020.
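
    As a rough illustration of the stagnation-detection module described above, here is a minimal Python sketch (not the paper's exact algorithm): a (1+1) EA whose mutation strength r is raised once the number of unsuccessful steps at the current r exceeds a threshold. The concrete threshold and the cap on r below are simplifying assumptions.

        import math
        import random

        def sd_one_plus_one_ea(fitness, n, max_evals=100_000):
            # (1+1) EA plus stagnation detection: standard bit-wise mutation with
            # rate r/n, where r grows after long phases without improvement.
            x = [random.randint(0, 1) for _ in range(n)]
            fx = fitness(x)
            r, fails = 1, 0
            # assumed threshold: after about (en/r)^r * ln n unsuccessful steps,
            # an improvement requiring exactly r bit flips is unlikely to exist
            threshold = lambda s: 2 * (math.e * n / s) ** s * math.log(n)
            for _ in range(max_evals):
                y = [b ^ (random.random() < r / n) for b in x]  # flip each bit w.p. r/n
                fy = fitness(y)
                if fy > fx:                        # strict improvement resets the module
                    x, fx, r, fails = y, fy, 1, 0
                else:
                    fails += 1
                    if fails > threshold(r):       # stagnation detected at strength r
                        r, fails = min(r + 1, n // 2), 0
            return x, fx

        # usage on OneMax as a stand-in fitness (the paper's analysis targets Jump)
        best, value = sd_one_plus_one_ea(lambda x: sum(x), n=50)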

    MMAS Versus Population-Based EA on a Family of Dynamic Fitness Functions


    When move acceptance selection hyper-heuristics outperform Metropolis and elitist evolutionary algorithms and when not

    Selection hyper-heuristics (HHs) are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently, selection HHs choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise standard unimodal benchmark functions from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper, we extend our understanding of the performance of HHs to the domain of multimodal optimisation by considering a Move Acceptance HH (MAHH) from the literature that can switch between elitist and non-elitist heuristics during the run. In essence, MAHH is a non-elitist search heuristic that differs from other search heuristics in the source of non-elitism. We first identify the range of parameters that allow MAHH to hillclimb efficiently and prove that it can optimise the standard hillclimbing benchmark function OneMax in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where MAHH outperforms elitist evolutionary algorithms and the well-known Metropolis non-elitist algorithm by quickly escaping local optima, and ones where it does not. Since MAHH is essentially a non-elitist random local search heuristic, the paper is of independent interest to researchers in the fields of artificial intelligence and randomised search heuristics.
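
    A minimal sketch of MAHH, under the usual definition from the literature, may make the source of non-elitism concrete: a one-bit random local search that, in each step, applies the non-elitist ALLMOVES acceptance operator with probability p and the elitist ONLYIMPROVING operator otherwise. The default value of p is an assumption for illustration; the paper identifies the parameter range that permits efficient hillclimbing.

        import math
        import random

        def mahh(fitness, n, p=None, max_evals=100_000):
            # Move Acceptance Hyper-Heuristic: flip one uniformly chosen bit per
            # step; accept the move if it improves, or unconditionally with
            # probability p (the sole source of non-elitism).
            if p is None:
                p = 1 / (n * math.log(n))  # assumed "small enough" setting
            x = [random.randint(0, 1) for _ in range(n)]
            fx = fitness(x)
            best, fbest = list(x), fx
            for _ in range(max_evals):
                y = list(x)
                y[random.randrange(n)] ^= 1
                fy = fitness(y)
                if fy > fx or random.random() < p:   # ONLYIMPROVING vs. ALLMOVES
                    x, fx = y, fy
                if fx > fbest:                       # track the best-so-far solution
                    best, fbest = list(x), fx
            return best, fbest

        # example on OneMax, the hillclimbing benchmark used in the paper
        print(mahh(lambda x: sum(x), n=64)[1])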

    Theoretical results on bet-and-run as an initialisation strategy

    Bet-and-run initialisation strategies have been experimentally shown to be beneficial on classical NP-complete problems such as the travelling salesperson problem and minimum vertex cover. We analyse the performance of a bet-and-run restart strategy, where k independent islands run in parallel for t1 iterations, after which the optimisation process continues on only the best-performing island. We define a family of pseudo-Boolean functions, consisting of a plateau and a slope, as an abstraction of real fitness landscapes with promising and deceptive regions. The plateau shows a high fitness, but does not allow for further progression, whereas the slope has a low fitness initially, but does lead to the global optimum. We show that bet-and-run strategies with non-trivial k and t1 are necessary to find the global optimum efficiently, and that the choice of t1 is linked to properties of the function. Finally, we provide a fixed-budget analysis to guide selection of the bet-and-run parameters to maximise expected fitness after t = k · t1 + t2 fitness evaluations.
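
    The following Python sketch mirrors the analysed scheme under assumed (1+1) EA islands with standard bit-wise mutation: k islands run independently for t1 iterations, then only the best-performing island continues for the remaining t2 iterations of the budget t = k · t1 + t2.

        import random

        def ea_step(x, fx, fitness, n):
            # one (1+1) EA iteration; mutation rate 1/n is an assumption
            y = [b ^ (random.random() < 1 / n) for b in x]
            fy = fitness(y)
            return (y, fy) if fy >= fx else (x, fx)

        def bet_and_run(fitness, n, k, t1, t2):
            # phase 1 (bet): k independent islands, t1 iterations each
            islands = []
            for _ in range(k):
                x = [random.randint(0, 1) for _ in range(n)]
                fx = fitness(x)
                for _ in range(t1):
                    x, fx = ea_step(x, fx, fitness, n)
                islands.append((x, fx))
            # phase 2 (run): only the best island spends the remaining budget
            x, fx = max(islands, key=lambda pair: pair[1])
            for _ in range(t2):
                x, fx = ea_step(x, fx, fitness, n)
            return x, fx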

    The Impact of a Sparse Migration Topology on the Runtime of Island Models in Dynamic Optimization

    Island models denote a distributed system of evolutionary algorithms which operate independently, but occasionally share their solutions with each other along the so-called migration topology. We investigate the impact of the migration topology by introducing a simplified island model with behavior similar to λ islands optimizing the so-called Maze fitness function (Kötzing and Molter in Proceedings of parallel problem solving from nature (PPSN XII), Springer, Berlin, pp 113–122, 2012). Previous work has shown that when a complete migration topology is used, migration must not occur too frequently, nor too soon before the optimum changes, to track the optimum of the Maze function. We show that using a sparse migration topology alleviates these restrictions. More specifically, we prove that there exist choices of model parameters for which using a unidirectional ring of logarithmic diameter as the migration topology allows the model to track the oscillating optimum through n Maze-like phases with high probability, while using any graph of diameter less than (Formula presented.) for some sufficiently small constant (Formula presented.) results in the island model losing track of the optimum with overwhelming probability. Experimentally, we show that very frequent migration on a ring topology is not an effective diversity mechanism, while a lower migration rate allows the ring topology to track the optimum for a wider range of oscillation patterns. When migration occurs only rarely, we prove that dense migration topologies of small diameter may be advantageous. Combined, our results show that the sparse migration topology is able to track the optimum through a wider range of oscillation patterns, and to cope with a wider range of migration frequencies.
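
    As a hedged sketch of such an island model in Python: lam islands each run a (1+1) EA, and every tau iterations each island sends its solution one step along a unidirectional ring, the receiver keeping the better of its own and the migrant. A time-dependent fitness callable stands in for Maze, whose definition is not reproduced here.

        import random

        def ring_island_model(fitness, n, lam, tau, max_iters):
            # lam islands, each holding one individual; fitness(x, t) may be dynamic
            pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(lam)]
            fit = [fitness(x, 0) for x in pop]
            for t in range(1, max_iters + 1):
                for i in range(lam):                     # independent (1+1) EA steps
                    y = [b ^ (random.random() < 1 / n) for b in pop[i]]
                    fy = fitness(y, t)
                    if fy >= fit[i]:
                        pop[i], fit[i] = y, fy
                if t % tau == 0:                         # migration along the ring
                    migrants = [(list(pop[i - 1]), fit[i - 1]) for i in range(lam)]
                    for i, (x, fx) in enumerate(migrants):
                        if fx >= fit[i]:                 # keep the better solution
                            pop[i], fit[i] = x, fx
            return max(zip(fit, pop))
        # note: stored fitness values would need refreshing whenever the dynamic
        # function changes phase; that bookkeeping is omitted from this sketch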

    A Runtime Analysis of Parallel Evolutionary Algorithms in Dynamic Optimization

    A simple island model with λ islands and migration occurring after every τ iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (1+λ) EA if τ = 1, i.e., if migration occurs during every iteration. It is proved that even for an increased offspring population size up to λ = O(n^(1−ε)), the (1+λ) EA is still not able to track the optimum of Maze. If the migration interval is chosen carefully, the algorithm is able to track the optimum even for logarithmic λ. The relationship of τ, λ, and the ability of the island model to track the optimum is then investigated more closely. Finally, experiments are performed to supplement the asymptotic results and to investigate the impact of the migration topology.
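
    For reference, a compact sketch of the τ = 1 special case, which (with a complete topology) behaves like the (1+λ) EA named above; the mutation operator is again an assumed standard choice.

        import random

        def one_plus_lambda_ea(fitness, n, lam, max_iters):
            # (1+lam) EA: one parent, lam offspring per iteration, best survives;
            # matches the island model with migration in every iteration (tau = 1)
            x = [random.randint(0, 1) for _ in range(n)]
            fx = fitness(x, 0)
            for t in range(1, max_iters + 1):
                offspring = []
                for _ in range(lam):
                    y = [b ^ (random.random() < 1 / n) for b in x]
                    offspring.append((fitness(y, t), y))
                fy, y = max(offspring)
                if fy >= fx:
                    x, fx = y, fy
            return x, fx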